Patent abstract:
METHOD AND SYSTEM FOR SHARING DIGITAL MEDIA CONTENT
Methods and systems for generating and sharing media clips are described. According to some embodiments, while a selection of digital media content (for example, a movie, television show, audio track, and so on) is being presented on a media player, a user creates one or more sets of reference points (for example, time markers) that define the boundaries (for example, start and end) of one or more media clips. These reference points are communicated from one media player to another, allowing the receiving media player to retrieve and play back the media clips from a source other than the media player on which the reference points were generated.
Publication number: BR112012005499B1
Application number: R112012005499-6
Filing date: 2010-09-10
Publication date: 2020-12-01
Inventors: Crx K. Chai; Alex Fishman
Applicant: Opentv, Inc.
IPC main classification:
Patent description:

[001] The present application claims the priority benefit of US Patent Application Serial No. 12/878,901, filed on September 9, 2010, and also claims the priority benefit of US Provisional Patent Application Serial No. 61/241,276, filed on September 10, 2009, which applications are incorporated herein by reference in their entirety.

TECHNICAL FIELD
[002] The present invention relates generally to digital media content systems and applications. More specifically, the present invention relates to methods and systems for sharing a portion (for example, one or more video clips) of a particular selection of digital media content.

BACKGROUND
[003] Despite significant improvements in computer network technologies, broadcast audio and video systems, and digital media playback devices, it remains a challenge to share a part of a specific selection of digital media content with another person. For example, when using a conventional media player to listen to or watch a stream of digital media content, such as a movie, television program, news broadcast, sporting event, or user-generated program, a user may identify a specific piece of content that the user would like to share with someone else. A user may wish to share a single scene from a movie with another person, a particular news segment from a news program, or just the inning of a baseball game in which a particular team scored a run. Most conventional media playback devices do not have a mechanism that will allow the user to share a portion of the digital media content - referred to here as a media clip - with another person who is not present at the same time and place as the viewer.
[004] Some media playback devices provide the ability to record digital media content that is being streamed to, and presented on, the digital media player. However, these media playback devices provide content recording capabilities primarily to allow time shifting - recording a program to a storage medium so that it can be seen or heard at a time more convenient for the user. Most media playback devices with content recording capabilities do not provide the ability to transfer the recorded digital media content to another device for playback on that other device.
[005] Another class of media playback devices allows for functionality that is commonly referred to as location shifting. Location shifting involves redirecting a stream of digital media content from a first media player to a second media player. For example, in a typical use case, a set-top box receives digital content through a broadcast network (for example, a television or radio network) and redirects the received stream of digital content through a computer network (for example, an Internet protocol, or IP-based, network) to a mobile or personal media player, such as a mobile phone or notebook computer. For location shifting to work properly, the network connection between the first media player and the second media player needs sufficient bandwidth and throughput to support the transfer of the digital media content in real time. Given the size (for example, the amount of data) of the computer files involved, particularly with digital content encoded in some high-quality formats (for example, high-definition formats), location shifting is not always a viable option.
[006] Some media playback devices may have feature sets that allow both time shifting and location shifting. For example, a stream of digital media content that has been previously recorded at a first media player (for example, a set-top box) may be accessible from a remote media player, so that it can be transmitted from the first media player to the remote media player at a time of the user's choosing. However, here too, the network connection between the two devices must be sufficient to support the near real-time streaming of large computer files. In addition, with conventional time and location shifting devices, the user does not have a convenient way to share just certain parts (for example, media clips) of a selection of digital media content.

DESCRIPTION OF THE DRAWINGS
[007] Some embodiments are illustrated by way of example and not by limitation in the figures of the accompanying drawings, in which: Figure 1 illustrates an example of a timeline view of a graphical representation of a selection of digital media content, such as a movie, having several scenes of interest; Figure 2 illustrates an example of a timeline view of a graphical representation of a selection of digital audio content, such as a news program or class lecture, having several parts of interest; Figure 3 illustrates an example of a timeline view of a graphical representation of a selection of digital content including reference points that define the limits for the reproduction of three different media clips, according to an example embodiment; Figure 4 illustrates an implementation of a digital content distribution system, according to an example embodiment; Figure 5 illustrates an example of a functional block diagram of a media player, according to an example embodiment; Figure 6 illustrates an example method, according to an example embodiment, for sharing one or more media clips; Figure 7 is a block diagram of a machine, in the form of a computer system (for example, a media player, or a content source device), within which a set of instructions, for causing the machine to perform any one or more of the methods discussed here, may be executed; Figure 8 is a schematic representation of an example interactive television environment within which certain aspects of the subject matter of the invention described herein can be implemented; and Figure 9 is a block diagram providing architectural details of a broadcast server, a modulator box, a set-top box and an optional storage device, according to an example embodiment.

DETAILED DESCRIPTION
[008] Methods and systems for sharing media clips are described. In the following description, for the sake of explanation, numerous specific details are presented in order to provide a complete understanding of the various aspects of different embodiments of the present invention. It will be apparent, however, to one skilled in the art, that the present invention can be practiced without these specific details.
[009] According to some example embodiments, while digital media content is being transmitted to, and/or presented on, a digital media player, a user who is viewing or listening to the digital media content can create a pair of time markers that together define the start and end boundaries of a portion of the digital media content being presented. A portion of the digital media content identified by these time markers is referred to here as a "video clip", "audio clip", "media clip", or simply a "clip". The time markers that define the boundaries (for example, start and end) of a clip are referred to here collectively as "reference points". More specifically, a reference point that represents the start boundary of a clip is referred to here as an "entry point", while a reference point that represents the end boundary of the clip is referred to here as an "exit point". In the context of the present invention, a selection of digital media content is simply a single unit of media content, such as a film title, a television program, a news program, a sporting event, a song, a classroom lecture, a collection of home videos, and so on.
[0010] After a user has identified one or more pairs of reference points, where each pair defines a media clip, the user can invoke a command directing the media player to communicate the reference points to another media player (for example, a target player). In some example embodiments, the reference points will be communicated to the target player together with supporting metadata. The supporting metadata can, for example, identify various attributes or characteristics of the digital media content for which the reference points were generated. For example, in some example embodiments, the metadata may include a content identifier that indicates the specific digital media content (for example, title and/or track) and the version, or format, to which the reference points refer. In addition, the metadata can include a content source identifier that identifies a content source from which the selection of media content from which the media clips are generated can be accessed and transmitted. In some example embodiments, the metadata may include data representing a single frame of a video clip, which can be used as a "thumbnail" image serving as a graphical representation of the video clip in a user interface on the receiving (for example, target) media player. In some example embodiments, the metadata can be part of the reference points, and in some example embodiments, the metadata can be stored separately from the reference points. In any case, the combination of reference points and supporting metadata provides a target player with all the information needed to request the digital content from a content source, and then present the media clips, as defined by the reference points.
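By way of illustration only, the reference point pairs and supporting metadata described above might be represented by a structure along the following lines; the field names (content_id, content_source_ids, thumbnail, and so on) are hypothetical and are not part of the described embodiments.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ReferencePointPair:
    """One media clip boundary pair: entry (start) and exit (end) offsets,
    expressed in seconds relative to the beginning of the content."""
    entry_point: float   # start boundary of the clip
    exit_point: float    # end boundary of the clip

@dataclass
class ClipMetadata:
    """Supporting metadata communicated along with the reference points."""
    content_id: str                    # identifies the title/track and its version or format
    content_source_ids: List[str]      # one or more sources where the content can be accessed
    thumbnail: Optional[bytes] = None  # e.g. a single encoded frame used as a preview image

@dataclass
class SharedClips:
    """What the first media player actually sends to the target player:
    only boundaries and metadata, never the media data itself."""
    metadata: ClipMetadata
    clips: List[ReferencePointPair] = field(default_factory=list)

# Example: two clips defined on the same selection of content
share = SharedClips(
    metadata=ClipMetadata(content_id="example-movie-v1",
                          content_source_ids=["vod.example.com"]),
    clips=[ReferencePointPair(125.0, 190.5), ReferencePointPair(2410.0, 2475.0)],
)
```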
[0011] In some example embodiments, the target player, which has received the reference points, will communicate a request for content to a content source identified in the metadata received with the reference points. In some example embodiments, the content request communicated from the target player to the content source will include the reference points, thus making it possible for the content source to extract the media clips from the requested content, and communicate only the data representing the media clips defined by the reference points. In an alternative example, the content request communicated from the target player to the content source will not include the reference points. In such an example, the target player will receive a full version of the requested digital content, and will use the reference points to extract the relevant media clips at the target player.
[0012] In various examples, there may be a wide variety of mechanisms through which reference points can be defined. For example, in a set-top box implementation, reference points can be defined by simply pressing a button on a remote control device, such as a conventional infrared remote control, or a virtual remote control application running on a Wi-Fi® connected mobile phone. For example, a remote control device may have dedicated ("hard-wired") buttons or soft programmable buttons for defining reference points (entry points and/or exit points). In such an implementation, the set-top box may have a signal receiver to receive a signal (for example, infrared, radio frequency, Bluetooth or Wi-Fi) that contains a command to establish a reference point. The command, once received, is processed to determine the reference point that identifies either the start or the end boundary of a media clip, corresponding to the media selection being presented at the time the command was invoked. The data in the reference point can be as simple as a time or offset reference, which identifies a particular point in time in the content, relative to the beginning of the content. In alternative example embodiments, the command that is processed to generate a reference point can itself be generated in other ways. For example, with a portable media player implementation, the portable media player can have a dedicated button that, when pressed, invokes a command to define a reference point. In some example embodiments, separate buttons may exist - one for entry points and one for exit points. In alternative example embodiments, a single button can be used for defining reference points, so that the first time the button is pressed, an entry point is defined, and the second time the button is pressed, an exit point is defined. In another example, a media player with a touchscreen may have user interface (UI) buttons that can be displayed on the touchscreen, such that when pressed, the user interface buttons allow the user to define the reference points. Those skilled in the art will readily recognize that there are a large number of alternative input mechanisms that can be used, according to alternative example embodiments, to invoke a command to generate a reference point on the media playback device that presents the digital media content.
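A minimal sketch of the single-button behavior described above, assuming a player object that can report its current playback offset; the class and method names are invented for this example.

```python
class StubPlayer:
    """Stand-in for a real media player, used only to make the sketch runnable;
    a real device would report its actual playback position."""
    def __init__(self):
        self.offset = 0.0
    def current_offset_seconds(self):
        return self.offset

class ReferencePointRecorder:
    """Implements the single-button behavior described above: the first press
    defines an entry point, the second press defines the matching exit point."""
    def __init__(self, player):
        self.player = player       # assumed to expose current_offset_seconds()
        self.pending_entry = None  # entry point awaiting its exit point
        self.pairs = []            # completed (entry, exit) reference point pairs

    def on_mark_button_pressed(self):
        offset = self.player.current_offset_seconds()
        if self.pending_entry is None:
            self.pending_entry = offset                      # start boundary of a new clip
        else:
            self.pairs.append((self.pending_entry, offset))  # end boundary completes the pair
            self.pending_entry = None

# Example: mark a clip running from 125.0 s to 190.5 s
player = StubPlayer()
recorder = ReferencePointRecorder(player)
player.offset = 125.0
recorder.on_mark_button_pressed()   # entry point
player.offset = 190.5
recorder.on_mark_button_pressed()   # exit point
print(recorder.pairs)               # [(125.0, 190.5)]
```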
[0013] Once generated, reference points can be communicated to another media player over a public or private communications network, wired or wireless. The communications network over which reference points are communicated can be a conventional computer network, such as the Internet, or a proprietary network. In some example embodiments, a media player can use short-range communication technology, such as Bluetooth, Near Field Communication (NFC) or infrared, to communicate reference points to other media player devices that are within relatively short range. For example, a user may have a favorite media clip file (defined by reference points) stored on a mobile media player (for example, a cell phone, a tablet computer, a personal media player, and so on). When the user is within range of another media player (for example, a set-top box), the user can use the short-range communication technology to transfer the reference points that define one or more media clips to the other media player. Because the media clips on the mobile media player are stored as reference points, the transfer takes place very quickly. Once the reference points are received at the target media player, the target media player can use the reference points to extract the relevant media clip from a locally stored copy of the digital media content selection, or, alternatively, use a different communications network to download the relevant content and display the content, as defined by the reference points. Therefore, with some embodiments, the content can be streamed or downloaded from a remote content source, while in other example embodiments, previously downloaded and stored content can be processed to extract and reproduce only the parts defined by the reference points. Other aspects of the various example embodiments are presented in connection with the description of the figures that follow.
[0014] Figure 1 illustrates an example of a timeline view 10 of a graphical representation of a selection of digital media content 12, such as a movie, having several scenes of interest 14, 16 and 18. In this example, moving from left to right along the line with reference number 20 represents the passage of time. Likewise, moving from left to right along the graphical representation of the digital media content 12 coincides with the chronological order in which the digital media content is to be presented. Therefore, the leftmost edge of the graphical representation of the media content represents the beginning of the content (for example, the film). The rightmost edge of the graphical representation of the content represents the end of the content.
[0015] In this example, there are three scenes of interest 14, 16 and 18 that the viewer would like to share with someone else. As illustrated in Figure 1, within the graphical representation of the digital content 12, the three scenes of interest 14, 16 and 18 are each represented by an individual box having a width that represents the duration of the scene, relative to the duration of the entire digital media content. In this example, the line with reference number 22 represents the frame currently displayed, and therefore the current playback position of the digital media content. For example, the line with reference number 22 corresponds to the image that is shown on the example display 24.
[0016] Similar to Figure 1, Figure 2 illustrates an example timeline view 24 of a graphical representation of an audio track 26, having three distinct parts (for example, parts 28, 30 and 32) that are of interest to a user. Similar to the graphical representation of the film 12 depicted in Figure 1, in Figure 2 the audio track is represented graphically as box 26 with three parts that are of interest to a listener of the audio track. In this example, the audio track plays over a set of speakers 34. The audio track can be a song, a program (for example, a news program) recorded from a radio broadcast, a classroom lecture, or any audio recording. In this example, the three parts of interest 28, 30 and 32 are represented by boxes with a width that represents the length of time of the respective parts of interest, relative to the duration of the audio track.
[0017] Figure 3 illustrates an example of a timeline view 40 of a graphical representation of a selection of digital media content 42 (for example, a movie) including reference points that define the limits for the reproduction of three different media clips 44, 46 and 48, according to an example embodiment. As shown in Figure 3, the graphical representation of the media content 42 includes three clips 44, 46 and 48. The first clip 44 is defined by a pair of reference points, including the entry point 50 and the exit point 52. The second clip 46 is defined by a second pair of reference points, including the entry point 54 and the exit point 56. Finally, the third clip 48 is defined by a third pair of reference points, including the entry point 58 and the exit point 60. The three pairs of reference points define three media clips from the same selection of digital media content 42.
[0018] In some example embodiments, reference points include, or are otherwise associated with, metadata that, for example, may include a content identifier that identifies the selected digital media content (e.g., a movie, by title), the particular version or format of the digital content, and a content source identifier that identifies a content source from which the digital content can be accessed. With some example embodiments, the metadata may also include a very short excerpt (for example, a frame, or a few seconds of audio content) for use in presenting a sample of the media clips for selection at the target media playback device. When these reference points and associated metadata are communicated from a first media player to a second (for example, target) media player, the target media player is able to use the reference points and metadata to retrieve the relevant clips and present the clips. Because the transfer of reference points involves only a very small amount of data, not the data that represents the actual media clips, the transfer from the first media player to the second media player occurs very quickly. The user who receives the reference points on his or her target media playback device can choose whether or not to play the media clips, and in some cases, select the content source from which to access the shared content. Consequently, in an example embodiment, the transfer of data representing the actual media clips only occurs if the receiving user, with whom the media clips are being shared, chooses to play the media clips. This is in contrast to media playback devices that transfer the actual data representing the media clips from a first media playback device to a second media playback device, regardless of whether the receiving user has any desire to play the media clips.
[0019] In Figure 3, the combined media clips are represented graphically by the rectangle with the reference number 62. As described in detail below, in some example embodiments the target device uses the pairs of reference points to extract the relevant media clips from a stream of digital media content. For example, in some example embodiments, the digital content is processed on the target device (for example, the set-top box that received the reference points and related metadata), in such a way that the relevant media clips are extracted from the digital content, as defined by the reference points. In other example embodiments, the reference point pairs can be communicated to a content source, and processed at the content source, in such a way that only the relevant data representing the media clips defined by the reference point pairs are communicated from the content source to the target media player. Advantageously, by communicating the reference points to the content source, the content source does not need to transmit the entire selection of digital content, but instead can transmit only the media clips extracted from the selected digital media content according to the reference points, thus preserving bandwidth.
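To make the bandwidth argument concrete, the sketch below shows one way a content source could return only the data covered by each reference point pair. The constant-bitrate byte mapping is an assumption made purely for illustration; a real source would cut on keyframe or segment boundaries.

```python
def extract_clips(content, bytes_per_second, pairs):
    """Return only the data that falls between each (entry, exit) reference
    point pair, so the full selection never has to be transmitted."""
    clips = []
    for entry, exit_ in pairs:
        start = int(entry * bytes_per_second)
        end = int(exit_ * bytes_per_second)
        clips.append(content[start:end])
    return clips

# Rough illustration of the saving: a 60-minute selection at ~500 KB/s is
# about 1.8 GB, while two one-minute clips extracted this way amount to
# about 60 MB in total.
fake_content = bytes(10_000)   # stand-in for encoded media data
print(len(extract_clips(fake_content, 100, [(10, 20), (50, 60)])[0]))  # 1000 bytes
```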
[0020] According to some example embodiments, when a user's media player is presenting a selection of digital media content, the user manipulates one or more control mechanisms (for example, buttons) to establish pairs of reference points that define a media clip. The user interface for the media player makes it easy to select multiple media clips, which can be concatenated in an order selected by the user. With some configurations, the user may be able to define the transitions between tracks, scenes or clips - such as sound or various visual effects. In addition, with some embodiments, the user who generated the media clips can select one or more sources from which the content can be accessed, so that a content source identifier is communicated to the target media player for each content source from which the content can be accessed. This allows the receiving user, with whom the clips were shared, to select a content source of his or her choice. If, for example, the receiving user subscribes to a particular content source, the user can select that content source for access to the shared media clips.
[0021] Figure 4 illustrates an implementation of a digital content distribution system 69, according to an example embodiment. As illustrated in Figure 4, the digital content distribution system 69 includes a first media player 70, a second (target) media player 72 and a media content source 74. For the purposes of this invention, a target media player is simply a device with which the user has selected to share one or more media clips.
[0022] In the example illustrated in Figure 4, the first media playback device 70 receives a stream of digital content, and presents the digital media content to a user of the first device 70, which can be a set-top box, a desktop computer, a laptop, a tablet computer, a cell phone, a personal media player, or any other similar device for consuming digital content. In the various example embodiments, the stream of digital media content can originate from any number and type of content sources. For example, the content source can be a satellite broadcast, a cable broadcast, an audio or video on demand source, a network-based computer source, a local storage device, and so on. In any case, as the user of the first media player listens to and/or views the stream of digital content, the user causes pairs of reference points to be generated. For example, the user can press a button or buttons on a remote control device to generate the reference point pairs. The media playback device 70 includes storage 76, where the reference points and corresponding metadata are generated and stored. For example, in some embodiments, as a user manipulates a control mechanism (for example, a button), a reference point processing module that resides on the playback device will automatically generate the reference point pairs and the corresponding metadata.
[0023] After the generation of one or more pairs of reference points and corresponding metadata, the user may wish to share the media clips defined by the reference points. Thus, the user can interact with a graphical user interface, facilitated by the media player 70, which allows the user to select another user (for example, the target user), or another media player (for example, the target device), to which the pair of reference points is to be communicated. For example, the user can select a person from a friends list populated with users who are part of the user's own, or a third party's, social network. Alternatively, a user can simply enter an email address, phone number, username, or some other identifier to identify the person with whom the content is to be shared. Once a target user or target media player has been selected or identified, the media player 70 communicates the pair of reference points to the target user or target device 72.
[0024] In some example embodiments, the target media playback device may be a device similar to the media playback device on which the reference points are generated, including, but not limited to: a set-top box, a desktop computer, laptop, tablet computer, cell phone, personal media player, or any other similar device for consuming digital content. When the target media player 72 receives the pair of reference points and corresponding metadata, the reference point pairs and corresponding metadata are processed and presented in a graphical user interface, allowing a user of the target media device to select the respective media clips to play. For example, in some example embodiments, a title and/or short descriptions of the media clips can be presented for selection by a user. In some example embodiments, a thumbnail image and/or short preview may be available, allowing the receiving user to preview the clip or media clips before requesting the actual playback of the clip or clips. In some example embodiments, when the pair of reference points and associated metadata are received, the target media player automatically initiates a content request for the content. In some example embodiments, the content request is communicated to a default media content source 74. Alternatively, the content request can be communicated to a media content source 74 indicated in the reference points and/or metadata. Alternatively, the target media playback device 72 may use a content source selection algorithm to select a content source from a variety of content sources. For example, the target media playback device 72 may have a locally stored copy of the content from which the media clips were generated. In this case, the media clips can be generated and presented from the pair of reference points without the need to request content from a remote source. Thus, with some embodiments, the media playback device can first determine whether a local copy of the digital media content selection is available from which the media clips can be generated. Only if a local copy is not available will the target media player request that the user select a content source, or, alternatively, automatically request the content from a default content source.
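One possible shape of the selection logic described in the paragraph above is sketched here: prefer a locally stored copy, then any source named in the received metadata, then a default source. All parameter names are hypothetical.

```python
def select_content_source(content_id, local_library, metadata_sources, default_source):
    """Pick a content source for the requested selection: a local copy first,
    then a source from the received metadata, then the default source."""
    if content_id in local_library:
        return local_library[content_id]   # no remote request needed
    if metadata_sources:
        return metadata_sources[0]         # a real UI might let the user choose among these
    return default_source

# Example
source = select_content_source(
    content_id="example-movie-v1",
    local_library={},                          # nothing stored locally
    metadata_sources=["vod.example.com"],      # taken from the received metadata
    default_source="default.cdn.example.com",
)
print(source)  # vod.example.com
```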
[0025] As illustrated in Figure 4, in some example embodiments, when the target media player 72 communicates a content request 78 to the content source 74, the content request includes a copy of the reference point pair or pairs that was initially communicated from the media playback device 70 to the target media playback device 72. As such, and as illustrated in Figure 4, in some example embodiments the media content source 74 is capable of processing the reference points included in a content request to generate the media clips defined by the reference points, in such a way that only the data representing the actual media clips (for example, cut media content 80) is communicated from the content source 74 to the target media player 72. When the cut media content 80 is received at the target media player 72, it is stored for later reproduction, or immediately presented (for example, played back) to a user.
[0026] In some alternative example embodiments, the content request communicated from the target media playback device 72 to the content source 74 will only include a content identifier to identify the selection of digital media content from which the media clips are being extracted. That is, the reference point pair or pairs that define the actual media clips may not be communicated to the media content source 74 in the content request. In that case, the content source 74 will communicate the entire selection of digital media content to the target media player 72. If, for example, the selection of digital media content represents a film, the entire film is communicated from the content source 74 to the target media player 72. When the target media player 72 receives the media content, the target media player 72 will process the media content and the reference point pairs to generate the media clips defined by the reference points. Once the media clips are generated, they are presented (for example, played back) to the user.
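The two request alternatives described in the preceding paragraphs could be contrasted roughly as follows; `request_content`, its parameters, and the constant-bitrate byte mapping are placeholders invented for this sketch rather than any actual protocol.

```python
def slice_for(pair, bytes_per_second=500_000):
    """Map an (entry, exit) pair in seconds to a byte range, assuming a
    constant bitrate purely for illustration."""
    entry, exit_ = pair
    return slice(int(entry * bytes_per_second), int(exit_ * bytes_per_second))

def fetch_clips(content_source, content_id, pairs, source_supports_trimming):
    """Sketch of the two alternatives above: either the source trims the
    content using the reference points, or the target receives the full
    selection and trims it locally."""
    if source_supports_trimming:
        # Mode 1: include the reference points; receive only the cut media content.
        return content_source.request_content(content_id, reference_points=pairs)
    # Mode 2: send only the content identifier; receive the whole selection
    # and extract the clips on the target device.
    full_content = content_source.request_content(content_id)
    return [full_content[slice_for(pair)] for pair in pairs]
```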
[0027] Figure 5 illustrates an example of a functional block diagram of a media playback device 90, according to an example embodiment. As illustrated in Figure 5, the media playback device 90 is shown by way of example, and includes a media stream receiving module 92, a reference point processing module 94, a command processing module 96, and a graphical user interface module 98. In addition, the media player includes an audio/video recording interface 91 and a communications module 95. The media player 90 can be a set-top box, a personal computer (desktop, workstation or laptop), a personal media player, a mobile phone (for example, a smartphone), a tablet computing device, or a similar device. According to some example embodiments, the media stream receiving module 92 receives a stream of digital media content from a content source. In various examples, the media stream receiving module 92 can receive content from one or more sources selected from a wide variety of sources. For example, the media stream receiving module 92 can receive streaming media content from a conventional over-the-air television broadcast, a satellite broadcast, or a data network (for example, a conventional computer network or a mobile phone based wide area network (WAN)). In some example embodiments, the source of the content need not be external, but can also be a fixed disk, or, in some example embodiments, another machine-readable medium, such as a DVD, Blu-ray disc, compact disc, or flash memory device.
[0028] As illustrated in Figure 5, the media playback device 90 includes a reference point processing module 94 that consists of a reference point definition module 100 and a media clip generator module 102. In some example embodiments, the reference point definition module 100 operates in conjunction with the command processing module 96 to generate the reference points that define a media clip. For example, in some embodiments, the command processing module 96 receives a signal from a touchscreen device (not shown), or a remote control device, directing the media playback device 90 to generate a reference point (an entry point or an exit point). When the command processing module 96 receives and processes such a command, the reference point definition module 100 generates a reference point that corresponds to the time position of the media content being presented by the media playback device 90. In some example embodiments, the reference point definition module 100 will analyze one or more data packets of the content being presented in order to identify the offset of the currently presented content relative to the start of the digital content selection. Therefore, in some example embodiments, the timing information that is included in a reference point is derived from the analysis of the timing information present in the data packets that make up the streaming media content. However, in some example embodiments, the reference point definition module 100 may include a timing mechanism to generate a time offset. In such an implementation, the time information included in the reference points will be generated based on the analysis of time information that is external to, or included in, the data packets that make up the selection of digital media content. In some embodiments, the analysis involved in the generation of reference points may take into consideration the specific version of the digital media content being presented. For example, if a version of the content is from a television program, the timing analysis can compensate for television commercials, and so on. In addition to analyzing, extracting and/or generating timing information for reference points, the reference point definition module 100 can also extract or generate certain metadata that is either inserted into the reference points, or stored separately in association with the generated reference points.
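As one possible illustration of deriving an offset from packet timing, the sketch below assumes MPEG-style presentation timestamps expressed in 90 kHz clock ticks; the function and its inputs are hypothetical and not tied to the described modules.

```python
MPEG_TS_CLOCK_HZ = 90_000  # MPEG presentation timestamps tick at 90 kHz

def offset_from_packet(packet_pts: int, first_pts: int) -> float:
    """Convert a packet's presentation timestamp into a playback offset, in
    seconds, relative to the first packet of the content selection.
    Wrap-around of the 33-bit PTS counter is ignored in this sketch."""
    return (packet_pts - first_pts) / MPEG_TS_CLOCK_HZ

# Example: if the selection starts at PTS 900_000 and the current packet
# carries PTS 12_600_000, the reference point would be set at 130.0 seconds.
print(offset_from_packet(12_600_000, 900_000))  # 130.0
```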
[0029] In some example embodiments, the media clip generator module 102 reads the existing reference points, and corresponding metadata, to generate media clips to be presented through the media playback device 90. For example, in some embodiments, the media playback device 90 can receive one or more pairs of reference points and associated metadata from a remote media player device. The media clip generator module 102 processes the received pairs of reference points and metadata to generate the media clips defined by the reference point pairs. In some embodiments, generating the media clips involves extracting, from a selection of digital media content, the specific part of the media content that is defined by the reference points. In some alternative example embodiments, the media clips can be generated by a remote content source, such as a web-based content source, or an audio or video on demand source. According to some embodiments, a graphical user interface element may display a selection of content sources from which a user chooses or selects a particular source. Thus, a request for content will be directed to the selected content source.
[0030] In some example embodiments, the media playback device 90 has a graphical user interface module 98, which facilitates the presentation of one or more UI elements that allow a user, in some cases, to generate reference points and select media clips for playback. For example, in some embodiments, a menu-based graphical interface may present buttons on a touchscreen device, allowing a user to press the buttons displayed on the touchscreen device and generate reference points that define media clips for content that is being displayed on the screen. Likewise, in some embodiments, the GUI may provide a mechanism for the presentation of several sets of media clips that have been received from various sources. For example, when multiple people have shared different media clips, the GUI provides a mechanism by which a user can select a particular media clip, or set of media clips, to be played. In addition, the graphical interface can provide a range of other on-screen information and functions, such as a channel selection function, a volume selection function, content source selection and a content guide.
[0031] As shown in Figure 5, the media playback device 90 includes an audio/video recording interface 91, which can facilitate audio and/or video recording through an externally connected audio/video capture device, such as a microphone, web camera, video camera, or similar device. In some example embodiments, the media playback device may include an integrated, built-in audio/video recording device (not shown). The audio/video recording device (whether internal or external) can be used to capture a personal video message, which can be communicated together with a pair of reference points, or set of reference points, and associated metadata. Therefore, a user can record an introductory audio/video message explaining the significance of the various media clips that the user has shared. The user interface can facilitate the recording of such personal video messages.
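Continuing the earlier hypothetical structure, the personal message could simply travel alongside the reference points as one more optional field; this layout is an assumption made only for the sketch.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SharedClipsWithMessage:
    """Hypothetical extension of the shared package: the reference points and
    metadata as before, plus an optional recorded introduction."""
    reference_points: list                  # list of (entry, exit) pairs in seconds
    metadata: dict                          # content id, source ids, thumbnail, ...
    intro_message: Optional[bytes] = None   # encoded audio/video captured via interface 91
```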
[0032] The graphical user interface module 98 can also allow a user to concatenate multiple media clips (defined by reference points) in an order determined by the user, and with transitions and special effects selected by the user. For example, a user can rearrange the order of the various media clips by manipulating elements of the graphical user interface representing the various media clips defined by the reference point pairs. Likewise, the user can select from a variety of preset transition effects that will be displayed at the transition point between any two media clips.
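A sketch of how a concatenated presentation order with per-boundary transitions might be represented; the effect names and playlist structure are invented for illustration.

```python
def build_playlist(clips, order, transitions):
    """Arrange clips (as (entry, exit) pairs) in the user-chosen order and
    attach a transition effect at each boundary between consecutive clips.

    `order` is a list of indexes into `clips`; `transitions` holds one effect
    name per boundary (len(order) - 1 entries)."""
    playlist = []
    for position, clip_index in enumerate(order):
        playlist.append({"clip": clips[clip_index]})
        if position < len(order) - 1:
            playlist.append({"transition": transitions[position]})
    return playlist

# Example: play the third clip first, then the first, with a cross-fade between them.
print(build_playlist(
    clips=[(125.0, 190.5), (900.0, 930.0), (2410.0, 2475.0)],
    order=[2, 0],
    transitions=["cross_fade"],
))
```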
[0033] Those skilled in the art will readily recognize that the functions that have been attributed here to certain modules can, in fact, be provided by different modules. Likewise, several of the modules and their respective functions as described herein can be combined in certain example embodiments, without departing from the scope and spirit of the invention. In addition, a variety of additional modules that are not explicitly described in the context of the example shown in Figure 5 may also be present in certain implementations of a media player consistent with the example embodiments.
[0034] Figure 6 illustrates an example method, according to one embodiment, for sharing one or more media clips. The method of Figure 6 begins with method operation 110, when a command is received directing a media player to generate a reference point. Method operation 110 generally occurs while digital media content is being presented on a device to a user. The reference point is generated to mark the point in the content being presented where a user would like to start, or end, a media clip. As such, during playback of the digital media content, a user can repeat method operation 110 a number of times to generate any number of reference point pairs for defining the media clips. In addition, as described above, the particular input mechanism used to invoke the command to generate the reference point may vary depending on the particular application of the media player. In some embodiments, a remote control device is used to signal a set-top box to establish the reference points. However, in alternative embodiments, one or more control mechanisms integrated with the media playback device (for example, virtual keys displayed on a touchscreen of a tablet computing device) can facilitate the invocation of commands for establishing the reference points.
[0035] Then, at method operation 112, the reference point pair or pairs and corresponding metadata that define the clip or clips generated at operation 110 are communicated from the media playback device (on which they were generated) to a target media playback device. In general, the communication of the reference points and corresponding metadata occurs in response to receiving a user-generated or policy-generated command requesting that the reference points and metadata be communicated to a particular person or device. As indicated above, the exact communication mechanism may vary depending on the application. In some example embodiments, the communication of the reference points and corresponding metadata is via a computer-based network using conventional network protocols. In some example embodiments, the reference points and corresponding metadata can be sent by email, or communicated via a messaging protocol (for example, short message service (SMS)). In other example embodiments, the communication mechanism may involve short-range network technology, such as Bluetooth, NFC or infrared.
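For the network case, a minimal sketch of serializing the reference points and metadata and posting them to the target device; the URL, path, and payload layout are assumptions made for the example, not a protocol described by the invention.

```python
import json
import urllib.request

def send_reference_points(target_url, pairs, metadata):
    """Serialize the reference point pairs and metadata as JSON and POST them
    to the target media playback device. Only boundaries and metadata are
    transmitted; the media data itself stays at the content source."""
    payload = json.dumps({
        "reference_points": [{"entry": e, "exit": x} for e, x in pairs],
        "metadata": metadata,
    }).encode("utf-8")
    request = urllib.request.Request(
        target_url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status

# Hypothetical usage (requires a listening target device):
# send_reference_points("http://target-player.local/clips",
#                       [(125.0, 190.5)], {"content_id": "example-movie-v1"})
```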
[0036] At method operation 114, the reference points and corresponding metadata are received at the target media player. In some example embodiments, when the target media player receives the reference points and metadata, the target media player simply stores the reference points, such that the media clip or clips defined by the reference points can be presented as a user-selectable option on the target media player. In such a scenario, the media clips may not be generated until a user has selected the media clips for presentation. In other example embodiments, the reference points and metadata can be pre-processed in order to prefetch the corresponding media clips, such that the media clips corresponding to the reference points are present in local storage on the target media playback device when the user selects the media clip or clips for playback.
[0037] After the reference points and corresponding metadata have been received by the target media playback device at method operation 114, at method operation 116 a content source containing the media content associated with the media clips is identified. For example, in the case of prefetched content, the target media playback device can first assess whether the digital media content is locally accessible, and only if the digital media content is not locally accessible will it access the digital media content from a remote source. With some embodiments, a content identifier will be sufficient to identify both the content source and the content selection from which the media clips are to be generated. However, in some embodiments, a content source identifier can be used to determine the content source to which the request for the content identified by the content identifier is to be directed. In some example embodiments, a default content source is automatically selected. For example, the target media player may be associated with a proprietary content distribution system, so that the target media player will always attempt to access content from the same content source. Alternatively, the reference points and/or metadata can identify a content source from which the content can be accessed. In some example embodiments, a content source selection algorithm can be used to select a content source from many available content sources. With some embodiments, when a user is asked to select a content source from which to access the content, the user can be presented with pricing information indicating the cost associated with accessing the digital content. In some cases, the pricing information may reflect an amount scaled to the size of the data that is to be requested, while in other cases, a flat fee may be charged, regardless of the size of the media clip being requested. In any case, after the content source is selected, at method operation 118 a request for content is communicated to the selected content source.
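A toy illustration of the two pricing approaches mentioned above (size-based versus flat fee); all rates and the constant-bitrate assumption are made up for the example.

```python
def estimated_price(clip_seconds, bytes_per_second=500_000,
                    per_megabyte=0.01, flat_fee=None):
    """Return an estimated cost for retrieving a clip: either a flat fee,
    or an amount scaled to the size of the data to be requested."""
    if flat_fee is not None:
        return flat_fee
    megabytes = clip_seconds * bytes_per_second / 1_000_000
    return round(megabytes * per_megabyte, 2)

print(estimated_price(60))                  # ~60 s clip priced by size: 0.3
print(estimated_price(60, flat_fee=1.99))   # same clip under a flat fee: 1.99
```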
[0038] As indicated in Figure 6, in some example embodiments, the request for content that is directed to the selected or identified content source includes the reference points that were received by the target media player at method operation 114. In addition, the content identifier can be communicated to the content source. Therefore, the content source receives the reference points with the content request and processes the requested content to generate the media clips, based on the information included in the reference points. In this way, the content source only needs to serve or transmit the data that represents the actual media clips, and not the entire digital content selection. Therefore, at method operation 120, after communicating the request for content, the target media playback device receives the requested content - in this case, the media clips defined by the reference points received at operation 114. Finally, at method operation 122, the target media player presents or plays the media clips. If retrieving the media clips from the content source was part of a prefetch operation, then the media clips are presented at operation 122 in response to a user request to play the media clips. Alternatively, if the retrieval of the media clips was in response to a user request to play the media clips, the media clips are presented at method operation 122 as they are received, being displayed on a display of, or connected to, the media player. Likewise, in the case of audio, the audio clips will be played through a speaker connected to the target media player.
[0039] In an alternative method for sharing media clips, the request for content directed to the identified or selected content source does not include the reference points received, for example, at method operation 114. Instead, the content request communicated from the target media player to the content source simply identifies the selection of digital media content from which the media clips are to be generated. In response to receiving the content request, the content source serves the identified selection of digital media content. When the digital media content is received at the target media playback device, the target media playback device processes the received digital media content to generate the media clips based on the information in the reference points. Once generated, the media clips can be immediately displayed or stored until requested.
[0040] The various operations of the example methods described here can be performed, at least partially, by one or more processors that are temporarily configured (for example, by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors can constitute processor-implemented modules that operate to perform one or more operations or functions. Accordingly, the modules referred to here may, in some example embodiments, comprise processor-implemented modules.
[0041] Likewise, the methods described here can be at least partially processor-implemented. For example, at least some of the operations of a method can be performed by one or more processors or processor-implemented modules. The performance of some of the operations can be distributed among one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processor or processors may be located in a single location (for example, within a home environment, an office environment or on a server farm), while in other embodiments the processors may be distributed across a number of locations.
[0042] The one or more processors can also operate to support the performance of the relevant operations in a "cloud computing" environment or as a service, for example, in the context of "software as a service" (SaaS). For example, at least some of the operations can be performed by a group of computers (as examples of machines including processors), these operations being accessible via a network (for example, the Internet) and through one or more appropriate interfaces (for example, Application Program Interfaces (APIs)).
[0043] Figure 7 is a block diagram of a machine, in the form of a computer system, within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed here, may be executed. In some embodiments, the machine operates as a standalone device or can be connected (for example, networked) to other machines. In a networked deployment, the machine can operate in the capacity of a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine can be a personal computer (PC), a tablet PC, a server, a set-top box (STB), a Personal Digital Assistant (PDA), a cell phone, a web appliance, a network router, switch or bridge, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. In addition, while only a single machine is illustrated, the term "machine" should also be taken to include any collection of machines that, individually or jointly, execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed here.
[0044] The example computer system 200 includes a processor 202 (for example, a central processing unit (CPU), a graphics processing unit (GPU) or both), a main memory 201 and a static memory 206, which communicate with each other via a bus 208. The computer system 200 may further include a display unit 210, an alphanumeric input device (e.g., a keyboard), and a user interface (UI) navigation device 214 (for example, a mouse). In an example embodiment, the display unit, input device and cursor control device are a touchscreen display. The computer system 200 may additionally include a storage device (for example, drive unit 216), a signal generating device 218 (for example, a speaker), a network interface device 220, and one or more sensors, such as a global positioning system sensor, compass, accelerometer, or other sensor.
[0045] The drive unit 216 includes a machine-readable medium 222 on which one or more sets of instructions and data structures (for example, software 223) are stored, embodying or used by any one or more of the methods or functions described here. The software 223 may also reside, completely or at least partially, within the main memory 201 and/or within the processor 202 during its execution by the computer system 200, the main memory 204 and the processor 202 also constituting machine-readable media.
[0046] While the machine-readable medium 222 is illustrated in an example embodiment to be a single medium, the term "machine-readable medium" can include a single medium or multiple media (for example, a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term "machine-readable medium" should also be taken to include any tangible medium that is capable of storing, encoding or carrying instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present invention, or that is capable of storing, encoding or carrying data structures used by or associated with such instructions. The term "machine-readable medium" should therefore be taken to include, but not be limited to, solid-state memories and optical and magnetic media. Specific examples of machine-readable media include non-volatile memory, including, by way of example, semiconductor memory devices, for example, EPROM, EEPROM, and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
[0047] The software 223 can further be transmitted or received over a communications network 226 using a transmission medium via the network interface device 220, using any of a number of well-known transfer protocols (e.g., HTTP). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), the Internet, mobile phone networks, plain old telephone service (POTS) networks, and wireless data networks (for example, Wi-Fi® and WiMax® networks). The term "transmission medium" shall be taken to include any intangible medium that is capable of storing, encoding or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible media to facilitate communication of such software.
[0048] Figure 8 is a schematic representation of an example interactive television environment within which certain aspects of the subject matter of the invention described here can be implemented. The interactive television environment 310 includes a source system 312, which communicates data (for example, digital media content selections, television content data and/or interactive application data) over a distribution network or system 314 and a modulator box 370 to a receiver system 316. According to some example embodiments, the source system 312 can process reference points received from a media playback device to select and concatenate media clips for communication to, and presentation on, another media player. In an example embodiment, the interactive television environment 310 optionally includes a storage unit 372 (for example, a personal computer) that communicates stored data over a network 374 to the modulator box 370 which, in turn, communicates the stored data, television content data, and interactive application data to the receiver system 316. The modulator box 370, the storage unit 372, and the receiver system 316 are typically co-located at a subscriber's home. Thus, in an example embodiment, the modulator box 370 can combine the television content data and interactive application data received from the remote source system 312 with locally stored data provided by the storage unit 372 located at the subscriber's home.
[0049] Turning first to the source system 312, an example headend system 318 operates to communicate the data as a broadcast transmission. To this end, the headend system 318 is shown to include one or more broadcast servers 320 and, optionally, one or more application servers 322. Each of the broadcast servers 320 can operate to receive, encode, packetize, multiplex, modulate, and transmit data of multiple sources and types. Although the example embodiment is described here as transmitting data from the headend system 318 as a broadcast, it will be appreciated that the relevant data can also be unicast or multicast from the source system 312 through the distribution system 314 and the modulator box 370 to the receiver system 316. In various embodiments, data can also be transmitted from the source system 312 via a network connection to the receiver system 316. Consistent with some embodiments, the content can be received via a cable network, a satellite broadcast network, or a data network (for example, the Internet), or a combination of these. Further details regarding an example broadcast server 320 are provided below with reference to Figure 9.
[0050] Each application server 322 can compile and provide interactive data modules to the broadcast server 320. The interactive data modules can also include data that is used by an interactive television application. An application server 322 may also include multiplexing functionality, to allow multiplexing of, for example, interactive television applications and data with associated audio and video signals received from various sources. An application server 322 may also have the capability to feed (e.g., stream) multiple interactive television applications to one or more broadcast servers 320 for distribution to the receiver system 316. To this end, each application server 322 may implement a so-called "carousel", in which the code modules and data are supplied to a broadcast server 320 in a repetitive, cyclic manner for inclusion within a transmission from the headend system 318.
[0051] The headend system 318 is also shown, by way of example, to include one or more network backend servers 324, which are coupled to the application servers 322 and to a modem pool 326. Specifically, the modem pool 326 is coupled to receive data from receiver systems 316 over a network 328 (for example, the Internet) and to provide this data to the network backend servers 324. The network backend servers 324 can then provide the data received from the receiver system 316 to the application servers 322 and the broadcast servers 320. Therefore, the network 328 and the modem pool 326 can operate as a return channel through which the receiver system 316 is provided with interactivity with the source system 312. The data provided to the headend system 318 via the return channel may include, for example, user input to an interactive television application executing at the receiver system 316, or data that is generated by the receiver system 316 and communicated to the source system 312. The return channel 330 can also provide a channel through which programs, advertisements/commercials, and applications from the source system 312 are supplied to the receiver system 316.
[0052] Within the source system 312, the headend system 318 is also shown, optionally, to receive data (for example, content, code and application data) from external sources. For example, Figure 8 illustrates the headend system 318 as being coupled to one or more content sources 332 and one or more application sources 334 over a network 336 (e.g., the Internet). For example, a content source 332 could be a provider of entertainment content (for example, movies), a provider of dynamic real-time data (for example, weather information), a provider of targeted advertisements, a provider of prime-time display advertisements, or the like. An application source 334 can be a provider of any interactive television application. For example, one or more application sources 334 may provide a TV media player application, Electronic Program Guide (EPG) and navigation applications, messaging and communication applications, information applications, sports applications, or games and gaming applications.
[0053] Turning now to the example distribution system 314, the distribution system 314 can, in one embodiment, support the distribution of a broadcast transmission from the source system 312 to the receiver system 316. As shown, the distribution network or system 314 may comprise a satellite, cable, terrestrial, or digital subscriber line (DSL) network, or any other data communication network or combination of such networks.
[0054] The receiver system 316 is shown, in an example embodiment, to include a television converter (set-top box, or STB) 338 that receives data through the distribution system 314 and the modulator box 370, and a modem 340 for return-channel communications with the head-end system 318. The receiver system 316 is also shown to include optional external systems, such as a user input device 343 (for example, a keyboard, remote control, mouse, etc.) and a display device 342, coupled to the television converter 338, for displaying content received at the television converter 338. In an example embodiment, the display device 342 can be a television set.
[0055] The television converter 338 can run three layers of software, namely an operating system 344, middleware 346 and, optionally, one or more interactive television applications 348. The middleware 346 can operate to shield an interactive television application 348 from the differences among various operating systems 344 and the hardware differences among different varieties of the television converter 338. To this end, the middleware 346 can provide Application Program Interfaces (APIs) and libraries to translate instructions received from an interactive television application 348 or a stored data application into low-level commands that can be understood by the television converter's hardware (for example, modems, interface ports, smart card readers, etc.).
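The following sketch illustrates, with entirely hypothetical device names and command strings, how a middleware layer of this kind might expose a small API that translates high-level application calls into low-level commands for the converter's hardware.

```python
class SetTopHardware:
    """Stand-in for device drivers (tuner, smart card reader, modem)."""
    def issue(self, command: str) -> str:
        # A real driver would talk to hardware; here we just echo the command.
        return f"executed: {command}"

class Middleware:
    """Hypothetical middleware API: shields applications from hardware and
    operating-system differences by translating high-level requests."""
    def __init__(self, hardware: SetTopHardware):
        self._hw = hardware

    def tune(self, channel: int) -> str:
        return self._hw.issue(f"TUNER SET_CHANNEL {channel}")

    def read_smart_card(self) -> str:
        return self._hw.issue("SMARTCARD READ_BLOCK 0")

# An interactive television application calls the middleware, not the drivers.
mw = Middleware(SetTopHardware())
print(mw.tune(42))
print(mw.read_smart_card())
```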
[0056] The modulator box 370, in an example embodiment, receives stored data 498 (see Figure 9, below) from the storage unit 372 and a broadcast transmission from the source system 312. The modulator box 370 multiplexes the stored data 498 into the broadcast transmission, thereby generating a second transmission that is communicated to the receiver system 316. It will, however, be appreciated that the functionality of the storage unit is optional. The storage unit 372 can store data and, on request, communicate the stored data to the modulator box 370 over a network 374 (e.g., Ethernet). The storage unit 372 can communicate the stored data in response to commands that are entered by a user of the television converter 338 and communicated to the storage unit 372 over a link 376.
[0057] Figure 9 is a block diagram providing architectural details of a broadcast server, a modulator box, a television converter, and an optional storage device, according to an example embodiment of the inventive subject matter. Specifically, Figure 9 shows a broadcast server 420, which can support a module carousel, as including a number of parallel paths that provide input to a multiplexer 450, each of the parallel paths including an encoder 452 and a packetizer 454. Each encoder 452 can operate to receive input from one or more sources. For example, encoder 452a is shown receiving application modules from an application server 422 which, in turn, is coupled to receive application data from one or more application sources 434. An application source 434 can be internal or external to a head-end system 318. Likewise, an encoder 452b is shown coupled to receive content data from one or more content sources 432, which can also be internal or external to the head-end system 318.
[0058] One skilled in the art will appreciate that each broadcast server 420 can include any number of parallel paths, coupled to any number of sources (for example, application or content sources 434 and 432), that provide input to the multiplexer 450. In addition, a head-end system 318 can implement any number of broadcast servers 420.
[0059] Each of the encoders 452 operates to encode data using any one or more of a number of compression algorithms, such as, for example, Motion Picture Experts Group (MPEG) compression algorithms. Each of the encoders 452 can also operate to time stamp the data for synchronization purposes. One skilled in the art will further understand that certain data types may not be amenable to encoding and may thus pass through, or bypass, an encoder 452 and be supplied to a packetizer 454 in an unencoded state. In an example embodiment, the packetizers 454 can be coupled to receive both encoded and unencoded data and to format these data into packets before eventual transmission through the distribution system 414 (for example, over a broadcast channel).
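As an informal sketch of the behavior just described (the helper names and the 188-byte packet size are assumptions, the latter merely echoing MPEG transport packets), data that is amenable to compression is encoded and time stamped, other data bypasses the encoder, and both are then formatted into packets:

```python
import time
import zlib

def encode_or_bypass(data: bytes, amenable: bool) -> tuple[bytes, float]:
    """Compress data when it is amenable to encoding, otherwise pass it
    through untouched; attach a time stamp for synchronization."""
    payload = zlib.compress(data) if amenable else data
    return payload, time.time()

def packetize(payload: bytes, packet_size: int = 188) -> list[bytes]:
    """Split a payload into fixed-size packets (size chosen for illustration)."""
    return [payload[i:i + packet_size] for i in range(0, len(payload), packet_size)]

video, ts = encode_or_bypass(b"\x00" * 1000, amenable=True)            # encoded
table, _ = encode_or_bypass(b"already-compact-data", amenable=False)   # bypassed
print(len(packetize(video)), len(packetize(table)))
```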
[0060] Each of the packetizers 454 supplies packets to the multiplexer 450, which multiplexes the packets into a transmission that is modulated by a modulator 451. The modulator 451 can apply a modulation technique prior to distribution of the broadcast transmission through the distribution system 414. For example, the modulator 451 can use a quadrature phase shift keying (QPSK) technique, a digital frequency modulation technique used for communicating data over coaxial cable network installations, or a quadrature amplitude modulation (QAM) technique, a digital amplitude modulation technique used for communicating data over wireless network installations.
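Purely as a schematic illustration, the small function below selects a modulation technique for a given distribution medium; the mapping simply restates the two examples in this paragraph and is not a normative rule of the system.

```python
def select_modulation(medium: str) -> str:
    """Pick a modulation technique for the outgoing transmission.
    The medium-to-technique mapping mirrors the examples in the text."""
    mapping = {
        "coaxial_cable": "QPSK",   # digital frequency modulation example above
        "wireless": "QAM",         # digital amplitude modulation example above
    }
    try:
        return mapping[medium]
    except KeyError:
        raise ValueError(f"no modulation configured for medium: {medium}")

print(select_modulation("coaxial_cable"))  # QPSK
```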
[0061] The modulator box 470, in an example embodiment, includes a demodulator 478, a multiplexer 480, a modulator 482, a packetizer 484, and a computer system 487. The demodulator 478 receives and demodulates the broadcast transmission, which is communicated to the multiplexer 480 and, in turn, to the modulator 482, which modulates it using a modulation technique as described above and communicates a transmission to the television converter 438. The computer system 487 can execute a modulator application 486, which includes a communication module 488. The communication module 488 can receive data modules from the storage unit 472, the data modules including the stored data 498 in the form of application data and content data. The application data includes executable applications that can be run by a computer system 464 on the television converter 438. The content data includes alphanumeric data and image, video, and audio content, and can be displayed on the display device 442 connected to the television converter 438. The packetizer 484 formats the data modules into packets and communicates the packets to the multiplexer 480, which multiplexes the packet stream containing the stored data 498 together with the various packet streams of the broadcast transmission to form the combined transmission.
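A rough sketch of the multiplexing step performed inside the modulator box: packets carrying stored data are interleaved with the packets of the demodulated broadcast to form a single combined transmission. The simple round-robin merge is an assumption made for the example; a real multiplexer would also respect bit-rate budgets and program identifiers.

```python
from itertools import zip_longest

def multiplex(broadcast_packets: list[bytes], stored_packets: list[bytes]) -> list[bytes]:
    """Interleave stored-data packets with broadcast packets into one stream."""
    combined = []
    for a, b in zip_longest(broadcast_packets, stored_packets):
        if a is not None:
            combined.append(a)
        if b is not None:
            combined.append(b)
    return combined

stream = multiplex([b"bcast-1", b"bcast-2", b"bcast-3"], [b"stored-1", b"stored-2"])
print(stream)
```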
[0062] The storage unit 472 (for example, a personal computer) includes a computer system 490, a storage device 494, and an encoder 492. The computer system 490 can run applications 491 (for example, an operating system, word processing, etc.), which may include a storage media player that receives and processes commands entered by a user operating the television converter 438. The storage media player can receive a command from a user requesting stored data 498 in the form of, for example, a file that resides in a database 496 on the storage device 494. In response to receiving the command, the storage media player may direct the storage unit 472 to communicate the requested file, in the form of data modules, to the modulator box 470, which in turn communicates the data modules to the television converter 438. The encoder 492 operates to encode data using any one or more of a number of compression algorithms, such as, for example, Motion Picture Experts Group (MPEG) compression algorithms. The encoder 492 can also operate to time stamp the data for synchronization purposes. It will be appreciated that certain types of data may not be amenable to encoding and can thus pass through, or bypass, the encoder 492 and be supplied to the modulator box 470 in an unencoded state.
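The sketch below imagines, with invented names, how the storage media player might react to a user command relayed from the television converter: look the requested file up in a local catalogue (standing in for database 496), encode it or pass it through, and break the result into data modules for the modulator box.

```python
import zlib

class StorageMediaPlayer:
    """Hypothetical sketch of the storage unit's media player application."""

    def __init__(self, catalogue: dict[str, bytes]):
        # The catalogue stands in for database 496 on storage device 494.
        self._catalogue = catalogue

    def handle_request(self, file_name: str, encode: bool = True) -> list[bytes]:
        """Return the requested stored data as a list of data modules."""
        data = self._catalogue[file_name]
        payload = zlib.compress(data) if encode else data   # encoder 492 or bypass
        module_size = 64
        return [payload[i:i + module_size] for i in range(0, len(payload), module_size)]

player = StorageMediaPlayer({"holiday.mpg": b"\x01" * 500})
modules = player.handle_request("holiday.mpg")      # command entered at the STB
print(len(modules), "data module(s) ready for the modulator box")
```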
[0063] The television converter 438 of the example receiver system 416 can be coupled to the modulator box 470, which is coupled to a network input (for example, a modem), cable input, satellite dish, or antenna in order to receive the broadcast transmission transmitted from the head-end system 418 through the distribution system 414. The broadcast transmission can be fed to the modulator box 470, which produces a transmission that is then communicated to an input 456 (for example, a receiver, a port, etc.) on the television converter 438. Where the input 456 comprises a receiver, the input 456 may, for example, include a tuner (not shown) that operates to select a channel on which the transmission is communicated. The packetized transmission is then fed from the input 456 to a demultiplexer 458, which demultiplexes the application data and the content data that constitute the transmission signal. For example, the demultiplexer 458 can deliver content data to an audio and video decoder 460, and application data to a computer system 464. The audio and video decoder 460 decodes the content data into, for example, a television signal. For example, the audio and video decoder 460 can decode the received content data into a suitable television signal, such as an NTSC, PAL, or HDTV signal. The television signal is then supplied from the audio and video decoder 460 to the display device 442.
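As an informal illustration of the receive path, the sketch below routes incoming packets either to the audio/video decoding side or to the application-handling computer system based on a per-packet type tag; the tagging scheme is an assumption for the example.

```python
from collections import defaultdict

def demultiplex(packets: list[tuple[str, bytes]]) -> dict[str, list[bytes]]:
    """Split a combined transmission into content data (for the audio and
    video decoder) and application data (for the computer system)."""
    streams: dict[str, list[bytes]] = defaultdict(list)
    for stream_type, payload in packets:
        streams[stream_type].append(payload)
    return streams

transmission = [("content", b"frame-1"), ("application", b"module-A"), ("content", b"frame-2")]
routed = demultiplex(transmission)
print(len(routed["content"]), "content packets -> decoder")
print(len(routed["application"]), "application packets -> computer system")
```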
[0064] The computer system 464, which may include a processor and memory, reconstitutes one or more interactive television applications (for example, from the source system 412) and one or more stored data applications (for example, from the storage unit 472) from the application data provided to it by the demultiplexer 458. The application data can include either application code or application information that is used by an application 448. In addition to reconstituting an application 448, the computer system 464 executes the application 448 to cause the television converter 438 to perform one or more operations. For example, the computer system 464 can generate a signal for the display device 442. For example, this signal from the computer system 464 can constitute an image or graphical user interface (GUI) to be superimposed on an image produced as a result of the signal supplied to the display device 442 from the audio and video decoder 460. The user input device 443 (for example, a keyboard, remote control, mouse, microphone, camera, etc.) is also shown coupled to the input 456, in order to allow a user to provide input to the television converter 438. Such input can, for example, be alphanumeric, audio, video, or control input (for example, manipulation of objects presented in a user interface).
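Finally, a sketch of how application data delivered repeatedly over the carousel might be reassembled into a complete application before the computer system executes it; the (module_index, total_modules, payload) layout is invented for the example.

```python
from typing import Optional

def reconstitute(application_packets: list[tuple[int, int, bytes]]) -> Optional[bytes]:
    """Rebuild an application from (module_index, total_modules, payload)
    tuples received from the demultiplexer. Because the carousel repeats,
    duplicates are ignored; returns None until every module has arrived."""
    received: dict[int, bytes] = {}
    total = None
    for index, count, payload in application_packets:
        total = count
        received.setdefault(index, payload)
    if total is None or len(received) < total:
        return None
    return b"".join(received[i] for i in range(total))

packets = [(0, 3, b"def draw():"), (1, 3, b" overlay()"), (0, 3, b"def draw():"), (2, 3, b"\n")]
print(reconstitute(packets))
```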
[0065] The computer system 464 is also shown coupled to the audio and video decoder 460, in order to allow the computer system 464 to control the decoder 460. The computer system 464 can also receive an audio or video signal from the decoder 460 and combine this signal with signals it generates, in order to allow the computer system 464 to provide a combined signal to the display device 442.
[0066] The computer system 464 is also shown, by way of example, coupled to an output 466 (for example, a transmitter, an output port, etc.) through which the television converter 438 is capable of providing output data, via the return channel 430, to an external system such as, for example, the head-end system 418. For this purpose, the output 466 is shown coupled to the modem 440 of the receiver system 416.
[0067] While the receiver system 416 is shown in Figures 8 and 9 as comprising a television converter 438 coupled to a display device 442, the components of the receiver system 416 could be combined into a single device (for example, a computer system), or could be distributed among a number of independent systems. For example, a separate receiver system 416 may comprise a set-top box 438 that is then coupled to a display device 442.
[0068] Although an embodiment has been described with reference to specific example embodiments, it will be evident that various modifications and changes can be made to these embodiments without departing from the broader spirit and scope of the invention. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. The accompanying drawings, which form a part hereof, show by way of illustration, and not of limitation, specific embodiments in which the subject matter can be practiced. The illustrated embodiments are described in sufficient detail to enable those skilled in the art to practice the teachings described herein.
Claims (13)
1. Method for sharing digital media content, characterized by the fact that it comprises: at a first media player, receiving from a second media player a first pair of reference points, a second pair of reference points, and a content identifier, the first and second pairs of reference points each comprising a first reference point defining a start point of a video clip and a second reference point defining an end point of the video clip, the content identifier identifying a selection of digital media content to which the pairs of reference points relate and identifying a content source from which content can be requested; at the first media player, receiving metadata information from the second media player, the metadata information including an indication to concatenate the media clips defined by the first and second pairs of reference points; displaying a plurality of content sources from which the content identified by the content identifier can be requested, the plurality of content sources being presented in such a way as to indicate the content source identified by the content identifier; receiving a user selection of a content source from the plurality of content sources displayed; communicating the first and second pairs of reference points, the content identifier, and the metadata to a content source corresponding to the user selection, together with a content request; receiving from that content source data representing a concatenation of at least two media clips extracted from the selection of digital media content identified by the content identifier according to the first and second pairs of reference points; and presenting the media clip on the first media player.
2. Method according to claim 1, characterized by the fact that it further comprises: receiving from the second media player a content source identifier identifying a content source from which the content identified by the content identifier can be requested, wherein the content source to which the content request is communicated is the content source identified by the content source identifier.
3. Method according to claim 1, characterized by the fact that communicating the first and second pairs of reference points and the content identifier to a content source, together with a content request, occurs after determining that the selection of digital media content identified by the content identifier is not locally accessible.
4. Method according to claim 1, characterized by the fact that it comprises, upon receiving the first and second pairs of reference points and the content identifier, displaying a notification indicating that a media clip has been shared, the notification including information identifying a person who shared the media clip.
5. Method according to claim 8, characterized by the fact that it further comprises: in addition to the first and second pairs of reference points and the content identifier, receiving from the second media player a media clip generated with an audio or video capture device of the second media player, and an indication that the media clip generated with the audio or video capture device is related to the first and second pairs of reference points and the content identifier.
6. Method characterized by the fact that it comprises: at a first media player, receiving from a second media player a first pair of reference points, a second pair of reference points, and a content identifier, the first and second pairs of reference points each comprising a first reference point that defines a start point of a media clip and a second reference point that defines an end point of the media clip, the content identifier identifying a selection of digital media content to which the pairs of reference points relate; at the first media player, receiving metadata information from the second media player, the metadata information including an indication to concatenate the media clips defined by the first and second pairs of reference points; using a content source selection algorithm to determine a content source from which to request the content identified by the content identifier, the content source selection algorithm determining that the content identified by the content identifier is not stored locally on the first media player and, upon making the determination that the content is not locally stored, communicating the pairs of reference points, the content identifier, and the metadata to a default content source together with a content request; receiving from the default content source data representing a concatenation of at least two media clips extracted from the selection of digital media content identified by the content identifier according to the first and second pairs of reference points; and presenting the media clip on the first media player.
7. Method according to claim 6, characterized by the fact that it further comprises: receiving from the second media player a content source identifier identifying a content source from which the content identified by the content identifier can be requested, wherein the default content source to which the content request is communicated is the content source identified by the content source identifier.
8. Method according to claim 6, characterized by the fact that it comprises, upon receipt of the pairs of reference points and the content identifier, displaying a notification indicating that a media clip has been shared, the notification including information identifying a person who shared the media clip.
9. Method according to claim 6, characterized by the fact that it further comprises: in addition to the first and second pairs of reference points and the content identifier, receiving from the second media player a media clip generated with an audio or video capture device of the second media player.
10. Method according to claim 6, characterized by the fact that the metadata information further includes information on transitions to be provided between the media clips in the concatenation of the at least two media clips.
11. Method characterized by the fact that it comprises: at a first media player, receiving from a second media player a pair of reference points, metadata information from the second media player, and a content identifier, the pair of reference points consisting of a first reference point that defines a start point of a media clip and a second reference point that defines an end point of the media clip, the content identifier identifying a selection of digital media content to which the pair of reference points relates and identifying a content source from which content can be requested, the metadata information including an indication to concatenate media clips defined by the pair of reference points; receiving a user selection of a content source from a plurality of displayed content sources; determining that the selection of digital content identified by the content identifier is stored and accessible locally on the first media player; accessing the locally stored selection of digital media content identified by the content identifier; generating a media clip by extracting a portion of the selection of digital media content defined by the pair of reference points; and presenting the media clip on the first media player.
12. Method according to claim 11, characterized by the fact that it comprises, upon receiving the pair of reference points, the metadata information, and the content identifier, displaying a notification indicating that a media clip has been shared, the notification including information identifying a person who shared the media clip.
13. Method according to claim 11, characterized by the fact that it further comprises: in addition to the pair of reference points and the content identifier, receiving from the second media player a media clip generated with an audio or video capture device of the second media player.